Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations
To help their users discover important items at a particular time, major
websites like Twitter, Yelp, TripAdvisor or NYTimes provide Top-K
recommendations (e.g., 10 Trending Topics, Top 5 Hotels in Paris or 10 Most
Viewed News Stories), which rely on crowdsourced popularity signals to select
the items. However, different sections of a crowd may have different
preferences, and there is a large silent majority who do not explicitly express
their opinion. Also, the crowd often consists of actors like bots, spammers, or
people running orchestrated campaigns. Recommendation algorithms today largely
do not consider such nuances and are hence vulnerable to strategic manipulation by
small but hyper-active user groups.
To fairly aggregate the preferences of all users while recommending top-K
items, we borrow ideas from prior research on social choice theory, and
identify a voting mechanism called Single Transferable Vote (STV) as having
many of the fairness properties we desire in top-K item (s)elections. We
develop an innovative mechanism to attribute the preferences of the silent majority,
which also makes STV completely operational. We show the generalizability of our
approach by implementing it on two different real-world datasets. Through
extensive experimentation and comparison with state-of-the-art techniques, we
show that our proposed approach provides maximum user satisfaction, and cuts
down drastically on items disliked by most but hyper-actively promoted by a few
users.
Comment: In the proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Please cite the conference version.
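As a hedged illustration of the voting mechanism named above, the sketch below implements a simplified multi-winner Single Transferable Vote in Python; the Droop-style quota, fractional surplus transfer, and tie-breaking used here are common textbook choices and not necessarily the exact variant developed in the paper.

```python
from collections import defaultdict

def stv_topk(ballots, k):
    """Select k items from ranked ballots with a simplified Single Transferable Vote.

    ballots: list of rankings (most preferred first), e.g. [["a", "b"], ["b", "c"]].
    Uses a Droop-style quota, fractional surplus transfer, and arbitrary tie-breaking;
    real STV implementations differ in these details.
    """
    candidates = {c for b in ballots for c in b}
    quota = len(ballots) / (k + 1)
    weight = [1.0] * len(ballots)            # each ballot's remaining voting weight
    elected, eliminated = [], set()

    def first_choice(ballot):
        # Highest-ranked candidate on this ballot that is still in the running.
        for c in ballot:
            if c not in elected and c not in eliminated:
                return c
        return None

    while len(elected) < k:
        standing = candidates - set(elected) - eliminated
        if len(standing) <= k - len(elected):   # remaining candidates fill the seats
            elected.extend(standing)
            break
        tally = defaultdict(float)
        for b, w in zip(ballots, weight):
            c = first_choice(b)
            if c is not None:
                tally[c] += w
        if not tally:                            # all ballots exhausted
            break
        winner = max(tally, key=tally.get)
        if tally[winner] > quota:
            # Elect the winner and transfer its surplus by down-weighting its ballots.
            keep = (tally[winner] - quota) / tally[winner]
            for idx, b in enumerate(ballots):
                if first_choice(b) == winner:
                    weight[idx] *= keep
            elected.append(winner)
        else:
            # Nobody reaches the quota: eliminate the candidate with the fewest votes.
            loser = min(standing, key=lambda c: tally.get(c, 0.0))
            eliminated.add(loser)
    return elected[:k]

# Toy example: a hyper-active minority pushes "spam", but transferable votes
# favour the items with broader support.
ballots = [["news", "sports"]] * 4 + [["sports", "news"]] * 3 + [["spam"]] * 2
print(stv_topk(ballots, 2))   # ['news', 'sports']
```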
Incremental Fairness in Two-Sided Market Platforms: On Smoothly Updating Recommendations
Major online platforms today can be thought of as two-sided markets with
producers and customers of goods and services. There have been concerns that
over-emphasis on customer satisfaction by the platforms may affect the
well-being of the producers. To counter such issues, a few recent works have
attempted to incorporate fairness for the producers. However, these studies
have overlooked an important issue in such platforms -- to supposedly improve
customer utility, the underlying algorithms are frequently updated, causing
abrupt changes in the exposure of producers. In this work, we focus on the
fairness issues arising out of such frequent updates, and argue for incremental
updates of the platform algorithms so that the producers have enough time to
adjust (both logistically and mentally) to the change. However, naive
incremental updates may become unfair to the customers. Thus, focusing on
recommendations deployed on two-sided platforms, we formulate an ILP-based
online optimization to deploy changes incrementally in n steps, where we can
ensure smooth transition of the exposure of items while guaranteeing a minimum
utility for every customer. Evaluations over multiple real world datasets show
that our proposed mechanism for platform updates can be efficient and fair to
both the producers and the customers in two-sided platforms.
Comment: To appear in the Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI), New York, USA, Feb 2020.
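The abstract does not spell out the exact ILP, so the following is only a rough sketch of what one incremental update step of this kind could look like, written with the PuLP modelling library; the variable names, parameters, and constraint forms are illustrative assumptions rather than the paper's formulation.

```python
import pulp

def incremental_step(relevance, prev_exposure, target_exposure, k, min_utility, max_shift):
    """Pick k items per customer for the next deployment step.

    relevance       : dict customer -> dict item -> relevance score
    prev_exposure   : dict item -> exposure (recommendation count) at the previous step
    target_exposure : dict item -> exposure under the fully updated algorithm
    min_utility     : per-customer lower bound on total relevance of the chosen slate
    max_shift       : how far each item's exposure may move from the previous step
    """
    customers, items = list(relevance), list(target_exposure)
    prob = pulp.LpProblem("incremental_update", pulp.LpMinimize)
    x = {(u, i): pulp.LpVariable(f"x_{u}_{i}", cat="Binary")
         for u in customers for i in items}
    # Slack measures how far each item's new exposure is from its target exposure.
    slack = {i: pulp.LpVariable(f"s_{i}", lowBound=0) for i in items}

    prob += pulp.lpSum(slack.values())          # objective: move toward target exposure
    for u in customers:                         # slate size and minimum customer utility
        prob += pulp.lpSum(x[u, i] for i in items) == k
        prob += pulp.lpSum(relevance[u].get(i, 0.0) * x[u, i] for i in items) >= min_utility
    for i in items:                             # smooth, bounded exposure transition
        new_exp = pulp.lpSum(x[u, i] for u in customers)
        prob += new_exp - target_exposure[i] <= slack[i]
        prob += target_exposure[i] - new_exp <= slack[i]
        prob += new_exp - prev_exposure.get(i, 0) <= max_shift
        prob += prev_exposure.get(i, 0) - new_exp <= max_shift

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return {u: [i for i in items if x[u, i].value() and x[u, i].value() > 0.5]
            for u in customers}
```

Running this step n times, with prev_exposure updated after each solve, would move the deployed recommendations gradually from the old exposure profile toward the target one.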
A Review of the Role of Causality in Developing Trustworthy AI Systems
State-of-the-art AI models largely lack an understanding of the cause-effect
relationship that governs human understanding of the real world. Consequently,
these models do not generalize to unseen data, often produce unfair results,
and are difficult to interpret. This has led to efforts to improve the
trustworthiness aspects of AI models. Recently, causal modeling and inference
methods have emerged as powerful tools. This review aims to provide the reader
with an overview of causal methods that have been developed to improve the
trustworthiness of AI models. We hope that our contribution will motivate
future research on causality-based solutions for trustworthy AI.
Comment: 55 pages, 8 figures. Under review.
Two-Sided Fairness in Non-Personalised Recommendations (Student Abstract)
Recommender systems are one of the most widely used services on several online platforms to suggest potential items to end-users. These services often use different machine learning techniques for which fairness is a concerning factor, especially when the downstream services have the ability to cause social ramifications. Thus, focusing on non-personalised (global) recommendations in news media platforms (e.g., top-k trending topics on Twitter, top-k news on a news platform, etc.), we discuss two specific fairness concerns together that are traditionally studied separately: user fairness and organisational fairness. While user fairness captures the idea of representing the choices of all individual users in global recommendations, organisational fairness tries to ensure politically/ideologically balanced recommendation sets. This makes user fairness a user-side requirement and organisational fairness a platform-side requirement. For user fairness, we test methods from social choice theory, i.e., various voting rules known to better represent user choices in their results. Even in our application of voting rules to the recommendation setup, we observe high user satisfaction scores. For organisational fairness, we propose a bias metric which measures the aggregate ideological bias of a recommended set of items (articles). Analysing the results obtained from voting rule-based recommendation, we find that while the well-known voting rules are better from the user side, they show high bias values and are clearly not suitable for the organisational requirements of the platforms. Thus, there is a need to build an encompassing mechanism that cohesively bridges the ideas of user fairness and organisational fairness. In this abstract, we frame these elementary ideas along with the motivation behind the need for such a mechanism.
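The proposed bias metric is not defined in this abstract; as a purely illustrative toy, the snippet below scores a recommended set by the absolute mean of per-article ideological lean values in [-1, 1], where the lean scores are assumed to come from some upstream annotation step.

```python
def aggregate_bias(recommended, lean):
    """Toy aggregate ideological bias of a recommended set (not the paper's metric).

    recommended: list of article ids; lean: dict article id -> lean score in [-1, 1].
    Returns 0.0 for a perfectly balanced slate and 1.0 for a fully one-sided one.
    """
    if not recommended:
        return 0.0
    return abs(sum(lean[a] for a in recommended) / len(recommended))

# Example: two left-leaning, one right-leaning, and one neutral article.
print(aggregate_bias(["a1", "a2", "a3", "a4"],
                     {"a1": -0.8, "a2": -0.6, "a3": 0.7, "a4": 0.0}))  # 0.175
```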
Fairness implications of encoding protected categorical attributes
Protected attributes are often presented as categorical features that need to
be encoded before feeding them into a machine learning algorithm. Encoding
these attributes is paramount as they determine the way the algorithm will
learn from the data. Categorical feature encoding has a direct impact on the
model performance and fairness. In this work, we compare the accuracy and
fairness implications of the two most well-known encoders: one-hot encoding and
target encoding. We distinguish between two types of induced bias that can
arise while using these encodings and can lead to unfair models. The first
type, irreducible bias, is due to direct group category discrimination and a
second type, reducible bias, is due to large variance in less statistically
represented groups. We take a deeper look into how regularization methods for
target encoding can mitigate the induced bias while encoding categorical
features. Furthermore, we tackle the problem of intersectional fairness that
arises when mixing two protected categorical features leading to higher
cardinality. This practice is a powerful feature engineering technique used for
boosting model performance. We study its implications for fairness, as it can increase both types of induced bias.
Comment: 22 pages.
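As a small, hedged sketch of the two encoders being compared (the data, column names, and smoothing strength m below are made up for illustration and are not the paper's setup), one-hot encoding and target encoding with additive smoothing could look like this in pandas:

```python
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "C"],   # protected categorical attribute
    "label": [1, 0, 1, 1, 0, 1],               # binary target
})

# One-hot encoding: one indicator column per category, no information leaked
# from the target.
one_hot = pd.get_dummies(df["group"], prefix="group")

# Target encoding with additive smoothing: the per-category mean of the target,
# pulled toward the global mean for small categories. This smoothing is the kind
# of regularization that can reduce the variance-driven ("reducible") bias for
# groups with few samples.
m = 5.0                                         # smoothing strength (assumed)
global_mean = df["label"].mean()
stats = df.groupby("group")["label"].agg(["mean", "count"])
smoothed = (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)
df["group_te"] = df["group"].map(smoothed)

print(one_hot)
print(df[["group", "group_te"]])
```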
Fairness in Agreement with European Values: An Interdisciplinary Perspective on AI Regulation
With increasing digitalization, Artificial Intelligence (AI) is becoming ubiquitous. AI-based systems to identify, optimize, automate, and scale solutions to complex economic and societal problems are being proposed and implemented. This has motivated regulation efforts, including the Proposal of an EU AI Act. This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them, focusing on (but not limited to) the Proposal. We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives. Then, we map these perspectives along three axes of interests: (i) Standardization vs. Localization, (ii) Utilitarianism vs. Egalitarianism, and (iii) Consequential vs. Deontological ethics, which leads us to identify a pattern of common arguments and tensions between these axes. Positioning the discussion within the axes of interest and with a focus on reconciling the key tensions, we identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.